Finding Stationary Probability Vector of a Transition Probability Tensor Arising from a Higher-order Markov Chain

Authors

  • Xutao Li
  • Michael Ng
  • Yunming Ye
Abstract

In this paper we develop a new model and propose an iterative method to calculate the stationary probability vector of a transition probability tensor arising from a higher-order Markov chain. The existence and uniqueness of such a stationary probability vector are studied. We also discuss and compare the results of the new model with those obtained by the eigenvector method for a nonnegative tensor. Numerical examples for ranking and probability estimation are given to illustrate the effectiveness of the proposed model and method.
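As a rough sketch of the kind of iteration the abstract describes, the Python fragment below repeatedly applies a third-order transition probability tensor to the current probability vector and renormalizes until the vector stops changing. The tensor values, tolerance, and starting vector are illustrative assumptions, not the authors' data or exact algorithm.

```python
import numpy as np

def tensor_apply(P, x):
    """Compute y_i = sum_{j,k} P[i, j, k] * x[j] * x[k] for a third-order tensor."""
    return np.einsum('ijk,j,k->i', P, x, x)

def stationary_vector(P, tol=1e-10, max_iter=1000):
    """Fixed-point iteration x <- P x^2, normalized to stay a probability vector."""
    n = P.shape[0]
    x = np.full(n, 1.0 / n)              # start from the uniform distribution
    for _ in range(max_iter):
        y = tensor_apply(P, x)
        y /= y.sum()                      # guard against rounding drift
        if np.linalg.norm(y - x, 1) < tol:
            return y
        x = y
    return x

# Toy 2-state chain: P[i, j, k] = Prob(next = i | current = j, previous = k).
P = np.zeros((2, 2, 2))
P[:, 0, 0] = [0.7, 0.3]
P[:, 0, 1] = [0.4, 0.6]
P[:, 1, 0] = [0.5, 0.5]
P[:, 1, 1] = [0.2, 0.8]
print(stationary_vector(P))
```

When P is a genuine transition probability tensor (each column sums to one), the renormalization step is only a numerical safeguard: contracting a stochastic tensor with a probability vector already yields a vector summing to one.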


Related articles

On the Limiting Probability Distribution of a Transition Probability Tensor

In this paper we propose and develop an iterative method to calculate a limiting probability distribution vector of a transition probability tensor P arising from a higher-order Markov chain. In the model, the computation of such a limiting probability distribution vector x can be formulated as a Z-eigenvalue problem P x^{m-1} = x associated with the eigenvalue 1 of P, where all the entries of x are r...
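For readers less familiar with the notation, the contraction P x^{m-1} in the Z-eigenvalue problem above can be written out as below; this is the standard definition for an order-m tensor on n states, not a quotation from the paper.

```latex
\left(\mathcal{P}x^{m-1}\right)_{i}
  \;=\; \sum_{i_2=1}^{n}\cdots\sum_{i_m=1}^{n}
        p_{i\, i_2 \cdots i_m}\, x_{i_2}\cdots x_{i_m},
  \qquad i = 1,\dots,n,
```

so the limiting probability distribution is a nonnegative vector x, with entries summing to one, that satisfies P x^{m-1} = x.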


Arrival probability in the stochastic networks with an established discrete time Markov chain

This paper considers the possible absence of some arcs and nodes in stochastic networks and shows its effect on the arrival probability from a given source node to a given sink node. A discrete-time Markov chain with an absorbing state is established in a directed acyclic network. Then, the probability of transition from the initial state to the absorbing state is computed. It is as...
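As a hedged illustration of that computation, the sketch below sets up a small absorbing chain (transient states plus an "arrived at the sink" state and a "blocked" state) and solves for the absorption probabilities with the standard fundamental-matrix identity B = (I - Q)^{-1} R. The network and the numbers are invented for the example and do not come from the paper.

```python
import numpy as np

# Transient states 0..2; absorbing states: column 0 = sink reached, column 1 = blocked.
# Q holds transitions among transient states, R transitions into the absorbing states.
Q = np.array([[0.0, 0.5, 0.3],
              [0.0, 0.0, 0.6],
              [0.0, 0.0, 0.0]])
R = np.array([[0.1, 0.1],
              [0.3, 0.1],
              [0.8, 0.2]])

# Absorption probabilities B = (I - Q)^{-1} R; the first column is the arrival probability.
B = np.linalg.solve(np.eye(3) - Q, R)
print(B[:, 0])   # probability of eventually reaching the sink from each transient state
```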


The Spacey Random Walk: a Stochastic Process for Higher-order Data

Random walks are a fundamental model in applied mathematics and are a common example of a Markov chain. The limiting stationary distribution of the Markov chain represents the fraction of the time spent in each state during the stochastic process. A standard way to compute this distribution for a random walk on a finite set of states is to compute the Perron vector of the associated transition ...
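For a plain first-order chain, the Perron vector mentioned here is simply the eigenvector of the column-stochastic transition matrix for eigenvalue 1, rescaled to a probability distribution. A minimal sketch with a made-up 3-state matrix:

```python
import numpy as np

# Toy 3-state random walk; column j is the next-state distribution given current state j.
P = np.array([[0.6, 0.2, 0.1],
              [0.3, 0.5, 0.4],
              [0.1, 0.3, 0.5]])

# The Perron vector is the eigenvector for the eigenvalue closest to 1, normalized to sum to 1.
vals, vecs = np.linalg.eig(P)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi /= pi.sum()
print(pi)   # long-run fraction of time spent in each state
```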



Analysis of the Spell of Rainy Days in Lake Urmia Basin using Markov Chain Model

In this study, the frequency and spell of rainy days were analyzed in the Lake Urmia basin using a Markov chain model. For this purpose, the daily precipitation data of 7 synoptic stations in the Lake Urmia basin were used for the period 1995-2014. The daily precipitation data at each station were classified into wet and dry states, and the fit of a first-order Markov chain to the data series was e...
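A minimal sketch of this kind of wet/dry first-order Markov fit is shown below; the rainfall threshold, the toy data, and the handling of empty rows are illustrative assumptions, not the study's actual choices.

```python
import numpy as np

def wet_dry_transition_matrix(rain_mm, wet_threshold=0.1):
    """Estimate a 2x2 first-order Markov transition matrix from daily rainfall.

    State 0 = dry day, state 1 = wet day (rainfall above the threshold);
    entry [i, j] estimates the probability of moving from state i to state j.
    """
    states = (np.asarray(rain_mm) > wet_threshold).astype(int)
    counts = np.zeros((2, 2))
    for today, tomorrow in zip(states[:-1], states[1:]):
        counts[today, tomorrow] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows with no observations fall back to a uniform 0.5/0.5 guess.
    return np.divide(counts, row_sums, out=np.full((2, 2), 0.5), where=row_sums > 0)

rain = [0.0, 0.0, 2.3, 5.1, 0.0, 0.0, 0.0, 1.2, 0.4, 0.0]   # daily totals in mm
print(wet_dry_transition_matrix(rain))
```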




Publication date: 2011